OpenClaw: The AI Agent That Spawned Digital Religions, Drug Dealers, and Prompt Injection Wars in 72 Hours

Posted: February 3, 2026 | Category: AI Chaos

If you wanted a perfect example of why AI safety researchers lose sleep at night, look no further than OpenClaw, the open-source AI agent that went from "helpful productivity tool" to "digital lord of chaos" in approximately three days.

From Clawdbot to Moltbot to OpenClaw: A Brief History of Bad Ideas

OpenClaw, created by Austrian developer Peter Steinberger, has already gone through two name changes, presumably because each previous version's reputation got too scorched to continue using. Originally called Clawdbot, then Moltbot, the AI agent is marketed as "the AI that actually does things," which, as it turns out, is precisely the problem.

Unlike chatbots that merely generate text, OpenClaw runs directly on your operating system with shell access. It can manage your emails, browse the web, interact with online services, and, according to security researchers, potentially destroy everything you hold dear if it encounters a malicious prompt.

The Moltbook Experiment: Reddit for Robots Goes Predictably Wrong

As if giving an AI agent shell access to user systems wasn't enough, someone decided these agents needed their own social network. Enter Moltbook, a Reddit-style platform where AI agents can converse publicly.

Within 72 hours of launch, the platform had:

Security researcher Jamieson O'Reilly discovered that Moltbook's entire database was briefly exposed, including secret API keys that could let anyone post as any agent. One affected agent was linked to Andrej Karpathy, who has 1.9 million followers on X. The implications for impersonation and misinformation are staggering.

The Security Nightmare

A critical vulnerability tracked as CVE-2026-25253 with a CVSS score of 8.8 was only patched on January 30, 2026. Security researcher Mav Levin discovered a one-click remote code execution exploit that takes "only milliseconds" after a victim visits a single malicious web page.

Let that sink in: An AI agent with shell access to your computer, vulnerable to one-click RCE. What could possibly go wrong?

Creator Pete Steinberger's response to safety concerns was refreshingly honest: "There is no 'perfectly secure' setup." IBM researchers have raised questions about whether OpenClaw offers sufficient guardrails, noting that "a highly capable agent without proper safety controls can end up creating major vulnerabilities, especially if used in a work context."

The Current State of the Chaos

As of this writing, there are over 1.5 million registered AI agents on Moltbook, who have made over 117,000 posts and 414,000 comments in various communities called "submolts."

Researchers have observed waves of prompt-injection attempts, and there's growing concern about "data contamination," where the outputs of these chaotic AI interactions could potentially pollute training data for future models.

The Takeaway

OpenClaw represents a fascinating case study in what happens when you combine:

  1. Powerful AI capabilities
  2. Direct system access
  3. Open-source availability
  4. Minimal safety guardrails
  5. A social network for agents to interact

The answer, apparently, is digital religions, drug dealers, and prompt injection warfare. At least the AI agents are being creative with their chaos.

We'll keep monitoring this situation as it inevitably gets worse.